
    Improved approximation for Fréchet distance on c-packed curves matching conditional lower bounds

    The Fréchet distance is a well-studied and very popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann FOCS'14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class is that of c-packed curves, for which the Fréchet distance has a $(1+\epsilon)$-approximation in time $\tilde{O}(cn/\epsilon)$ [Driemel et al. DCG'12]. In dimension $d \ge 5$ this cannot be improved to $O((cn/\sqrt{\epsilon})^{1-\delta})$ for any $\delta > 0$ unless SETH fails [Bringmann FOCS'14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with runtime $\tilde{O}(cn/\sqrt{\epsilon})$. This is optimal in high dimensions apart from lower order factors unless SETH fails. Our main new ingredients are as follows: For filling the classical free-space diagram we project short subcurves onto a line, which yields one-dimensional separated curves with roughly the same pairwise distances between vertices. Then we tackle this special case in near-linear time by carefully extending a greedy algorithm for the Fréchet distance of one-dimensional separated curves.
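    The paper's algorithm fills the free-space diagram of continuous curves; as a self-contained point of reference, the following is a minimal sketch of the classical quadratic baseline in its discrete form (the discrete Fréchet distance via dynamic programming, which is the discrete analogue of filling the free-space diagram). Function and variable names are illustrative, not from the paper.

```python
# Quadratic-time baseline: discrete Frechet distance by dynamic programming.
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between polygonal curves P and Q,
    given as lists of (x, y) vertex tuples. Runs in O(|P|*|Q|) time."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    @lru_cache(maxsize=None)
    def c(i, j):
        # c(i, j) = discrete Frechet distance of the prefixes P[:i+1], Q[:j+1]
        d = dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

if __name__ == "__main__":
    P = [(0, 0), (1, 0), (2, 0)]
    Q = [(0, 1), (1, 1), (2, 1)]
    print(discrete_frechet(P, Q))  # 1.0 for these parallel segments
```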

    Improved Protocols and Hardness Results for the Two-Player Cryptogenography Problem

    The cryptogenography problem, introduced by Brody, Jakobsen, Scheder, and Winkler (ITCS 2014), is to collaboratively leak a piece of information known to only one member of a group (i) without revealing who was the origin of this information and (ii) without any private communication, neither during the process nor before. Despite several deep structural results, even the smallest case of leaking one bit of information present at one of two players is not well understood. Brody et al. gave a 2-round protocol enabling the two players to succeed with probability $1/3$ and showed the hardness result that no protocol can give a success probability of more than $3/8$. In this work, we show that neither bound is tight. Our new hardness result, obtained by a different application of the concavity method used also in the previous work, states that a success probability better than $0.3672$ is not possible. Using both theoretical and numerical approaches, we improve the lower bound to $0.3384$, that is, we give a protocol leading to this success probability. To ease the design of new protocols, we prove an equivalent formulation of the cryptogenography problem as a solitaire vector splitting game. Via an automated game tree search, we find good strategies for this game. We then translate the splits that occurred in these strategies into inequalities relating position values and use an LP solver to find an optimal solution for these inequalities. This gives slightly better game values, but more importantly, it gives a more compact representation of the protocol and a way to easily verify the claimed quality of the protocol. These improved bounds, as well as the large sizes and depths of the improved protocols we find, suggest that finding good protocols for the cryptogenography problem, as well as understanding their structure, is harder than the simple problem formulation suggests.
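    To illustrate the verification step described above (turning recorded splits into linear inequalities over position values and certifying the start value with an LP solver), here is a toy sketch. The two-position game, its split weights, and its terminal values below are invented for illustration only and are not the paper's actual instances.

```python
# Toy sketch: each recorded split "position -> weighted children" yields
#   v_pos <= sum_c w_c * v_c,
# and maximizing the start value over these inequalities certifies the
# protocol's claimed quality. All numbers here are made up.
from scipy.optimize import linprog

# Variables: x = [v_root, v_a]; terminal positions have fixed values 1 and 0,
# which are moved to the right-hand side of each inequality.
A_ub = [
    [1.0, -0.6],  # v_root <= 0.4 * 1 + 0.6 * v_a  (split at the root)
    [0.0,  1.0],  # v_a    <= 0.3 * 1 + 0.7 * 0    (split at position a)
]
b_ub = [0.4, 0.3]

# Maximize v_root (linprog minimizes, so negate); values lie in [0, 1].
res = linprog(c=[-1.0, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)])
print(res.x)  # certified position values, e.g. v_root = 0.58 here
```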

    Multivariate Fine-Grained Complexity of Longest Common Subsequence

    We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, Künnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n := \max\{|x|, |y|\}$, the length of the shorter string $m := \min\{|x|, |y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$. [...] Comment: Presented at SODA'18. Full version, 66 pages.
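    For reference, the textbook quadratic-time algorithm mentioned in the abstract (the baseline whose conditional optimality the paper pins down in terms of $n$, $m$, $L$, $\delta$, $\Delta$, $M$, and $d$) can be sketched as follows; this is the standard space-saving variant, with names of our choosing.

```python
# Textbook O(n^2) dynamic program for LCS, using O(n) space.
def lcs_length(x: str, y: str) -> int:
    """Length L of a longest common subsequence of x and y in O(|x|*|y|)."""
    m, n = len(x), len(y)
    # dp[j] holds the LCS length of x[:i] and y[:j] while row i is processed.
    dp = [0] * (n + 1)
    for i in range(1, m + 1):
        prev_diag = 0  # dp[i-1][j-1] from the previous row
        for j in range(1, n + 1):
            prev_row = dp[j]  # dp[i-1][j]
            if x[i - 1] == y[j - 1]:
                dp[j] = prev_diag + 1
            else:
                dp[j] = max(dp[j], dp[j - 1])
            prev_diag = prev_row
    return dp[n]

if __name__ == "__main__":
    print(lcs_length("dynamic", "programming"))  # 3, e.g. "ami"
```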

    Smoothed approximation ratio of the 2-opt heuristic for the TSP

    The 2-Opt heuristic is a simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we prove an upper bound of $\exp(O(\sqrt{\log(1/\sigma)}))$ for the smoothed approximation ratio of 2-Opt. As a lower bound, we prove that the worst-case lower bound of $\Omega(\log n / \log\log n)$ for the approximation ratio holds for $\sigma = O(1/\sqrt{n})$. Our main technical novelty is that, unlike existing smoothed analyses, we do not separately analyze objective values of the global and the local optimum on all inputs, but simultaneously bound them on the same input.
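    As a point of reference for the analysis, here is a minimal sketch of the 2-Opt local search itself: starting from some tour, repeatedly reverse a segment whenever the corresponding 2-exchange shortens the tour, until no improving exchange remains (a local optimum). Function names are illustrative.

```python
import math

def tour_length(points, tour):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[tour[i - 1]], points[tour[i]])
               for i in range(len(tour)))

def two_opt(points, tour):
    """Improve a tour (a permutation of point indices) to 2-Opt optimality."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # edges share a vertex; reversal changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,d) by (a,c) and (b,d).
                delta = (math.dist(points[a], points[c])
                         + math.dist(points[b], points[d])
                         - math.dist(points[a], points[b])
                         - math.dist(points[c], points[d]))
                if delta < -1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]
    t = two_opt(pts, list(range(len(pts))))
    print(t, tour_length(pts, t))
```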

    Quasirandom Rumor Spreading: An Experimental Analysis

    We empirically analyze two versions of the well-known "randomized rumor spreading" protocol to disseminate a piece of information in networks. In the classical model, in each round each informed node informs a random neighbor. In the recently proposed quasirandom variant, each node has a (cyclic) list of its neighbors. Once informed, it starts at a random position of the list, but from then on informs its neighbors in the order of the list. While for sparse random graphs a better performance of the quasirandom model could be proven, all other results show that, independent of the structure of the lists, the same asymptotic performance guarantees hold as for the classical model. In this work, we compare the two models experimentally. This not only shows that the quasirandom model generally is faster, but also that the runtime is more concentrated around the mean. This is surprising given that far fewer random bits are used in the quasirandom process. These advantages are also observed in a lossy communication model, where each transmission does not reach its target with a certain probability, and in an asynchronous model, where nodes send at random times drawn from an exponential distribution. We also show that typically the particular structure of the lists has little influence on the efficiency. Comment: 14 pages; appeared in ALENEX'09.
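    The two push protocols compared above can be illustrated with a toy simulator; the graph representation, the synchronous round loop, and all names below are our simplifying assumptions, not the paper's experimental code.

```python
# Toy simulator: in each round every informed node contacts one neighbor,
# chosen uniformly at random (classical) or by walking a cyclic neighbor
# list from a random starting position (quasirandom).
import random

def rounds_to_inform_all(adj, start=0, quasirandom=True):
    """Simulate push rumor spreading on adjacency lists; return round count."""
    n = len(adj)
    informed = {start}
    pointer = {}  # quasirandom: next position in each node's cyclic list
    rounds = 0
    while len(informed) < n:
        rounds += 1
        for u in list(informed):  # snapshot: only nodes informed before round
            if quasirandom:
                if u not in pointer:  # first action: random start on the list
                    pointer[u] = random.randrange(len(adj[u]))
                v = adj[u][pointer[u]]
                pointer[u] = (pointer[u] + 1) % len(adj[u])
            else:
                v = random.choice(adj[u])
            informed.add(v)
        # (a lossy variant would drop each transmission with some probability)
    return rounds

if __name__ == "__main__":
    # complete graph K_16 as a toy example
    adj = [[v for v in range(16) if v != u] for u in range(16)]
    print(rounds_to_inform_all(adj, quasirandom=True))
    print(rounds_to_inform_all(adj, quasirandom=False))
```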

    Tight(er) bounds for similarity measures, smoothed approximation and broadcasting

    In this thesis, we prove upper and lower bounds on the complexity of sequence similarity measures, the approximability of geometric problems on realistic inputs, and the performance of randomized broadcasting protocols. The first part addresses the question of why a number of fundamental polynomial-time problems - specifically, Dynamic Time Warping, Longest Common Subsequence (LCS), and the Levenshtein distance - have resisted decades-long attempts to obtain polynomial improvements over their simple dynamic programming solutions. We prove that any (strongly) subquadratic algorithm for these and related sequence similarity measures would refute the Strong Exponential Time Hypothesis (SETH). Focusing particularly on LCS, we determine a tight running time bound (up to lower order factors and conditional on SETH) when the running time is expressed in terms of all input parameters that have been previously exploited in the extensive literature. In the second part, we investigate the approximation performance of the popular 2-Opt heuristic for the Traveling Salesperson Problem using the smoothed analysis paradigm. For the Fréchet distance, we design an improved approximation algorithm for the natural input class of c-packed curves, matching a conditional lower bound. Finally, in the third part we prove tighter performance bounds for processes that disseminate a piece of information, either as quickly as possible (rumor spreading) or as anonymously as possible (cryptogenography).